Community Q&A | Episode 17
Description
Register for FREE Infosec Webcasts, Anti-casts & Summits –
https://poweredbybhis.com
In episode 17 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, Brian Fehrman, and Bronwen Aker answer viewer-submitted questions about system prompts, prompt injection risks, AI hallucinations, deep fakes, and when (and when not) to use AI in cybersecurity.
They'll discuss the difference between system and user prompts, how temperature settings impact LLM outputs, and the biggest mistakes companies make when deploying AI models.
They'll also explain how to reduce hallucinations and how to approach AI responsibly in security workflows, and Derek shares his method for detecting audio deep fakes.
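As a quick illustration of the temperature discussion in this episode (not material from the show itself): a minimal Python sketch of how the temperature knob rescales a model's next-token probabilities. The logits here are toy values, not output from a real model.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; temperature rescales them.

    Lower temperature sharpens the distribution (outputs become more
    deterministic); higher temperature flattens it (more varied outputs).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three candidate next tokens.
logits = [2.0, 1.0, 0.5]

cold = softmax(logits, temperature=0.2)  # near-greedy: top token dominates
warm = softmax(logits, temperature=1.5)  # more exploratory: flatter spread

print(cold, warm)
```

This is why repeating the same prompt can give different responses: sampling from the flatter, higher-temperature distribution picks lower-probability tokens more often.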
----------------------------------------------------------------------------------------------
Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/
Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/
Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
- (00:00) - Intro
- (01:10) - What is a system prompt? How is it different from a user prompt?
- (03:35) - What are some common system prompt mistakes?
- (06:54) - Does repeating a prompt give different responses? (non-deterministic)
- (07:56) - The temperature knob effect
- (12:18) - When should I use AI? When should I not?
- (16:47) - What are best practices to reduce hallucinations?
- (20:29) - End-user temperature knob work-around
- (22:55) - AI bots that rewrite their code to avoid shutdown commands
- (26:53) - NCSL.org - Updates on legislation affecting AI
- (29:44) - How do we detect AI deep fakes?
- (30:00) - Derek’s deep fake demo video
- (30:38) - DISCLAIMER - Do not use AI deep fakes to break the law!
- (31:29) - F5-tts.org - Deep fake website
- (35:02) - Derek pranks his family using AI